Approximation Stability and Boosting
Authors
Abstract
Stability has been explored in recent years to study the performance of learning algorithms, and it has been shown that stability is sufficient for generalization and is both sufficient and necessary for the consistency of ERM in the general learning setting. Previous studies showed that AdaBoost has almost-everywhere uniform stability if the base learner has L1 stability. L1 stability, however, is too restrictive, and we show that AdaBoost becomes a constant learner if the base learner is not a real-valued learner. Considering that AdaBoost is mostly successful as a classification algorithm, stability analysis for AdaBoost when the base learner is not real-valued is an important yet unsolved problem. In this paper, we introduce approximation stability and prove that approximation stability is sufficient for generalization, and is sufficient and necessary for the learnability of AERM in the general learning setting. We prove that AdaBoost has approximation stability and thus generalizes well, and an exponential bound for AdaBoost is provided.
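As background for the classification setting the abstract refers to, the listing below is a minimal NumPy sketch of standard AdaBoost with a {-1, +1}-valued base learner (decision stumps), i.e., a base learner that is not real-valued. It is a generic illustration under these assumptions, not the analysis or constructions of the paper, and the helper names fit_stump, adaboost, and predict are hypothetical.

import numpy as np

def fit_stump(X, y, w):
    # Exhaustively pick the threshold stump with minimum weighted 0-1 error.
    n, d = X.shape
    best = (np.inf, 0, 0.0, 1)              # (error, feature, threshold, polarity)
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost(X, y, T=50):
    # Plain AdaBoost; y takes values in {-1, +1}, and so do the stumps.
    n = len(y)
    w = np.full(n, 1.0 / n)                 # sample distribution D_t
    ensemble = []                           # list of (alpha_t, feature, threshold, polarity)
    for _ in range(T):
        err, j, thr, pol = fit_stump(X, y, w)
        if err >= 0.5:                      # no better than random guessing: stop
            break
        err = max(err, 1e-12)               # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)      # re-weight: mistakes become heavier
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    # Sign of the weighted vote of the stumps.
    score = np.zeros(len(X))
    for alpha, j, thr, pol in ensemble:
        score += alpha * np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
    return np.sign(score)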
Similar Resources
An Alternative Stability Proof for Direct Adaptive Function Approximation Techniques Based Control of Robot Manipulators
This short note points out an improvement on the robust stability analysis for electrically driven robots given in the paper. In that paper, the author presents a FAT-based direct adaptive control scheme for electrically driven robots in the presence of nonlinearities associated with actuator input constraints. However, the stability analysis offered for the closed-loop system is not suitable. In other w...
Approximation of Stochastic Parabolic Differential Equations with Two Different Finite Difference Schemes
We focus on the use of two stable and accurate explicit finite difference schemes in order to approximate the solution of stochastic partial differential equations of Itô type, in particular, parabolic equations. The main properties of these deterministic difference methods, i.e., convergence, consistency, and stability, are separately developed for the stochastic cases.
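The snippet above does not spell out the two schemes, so as a generic illustration (not the methods of that paper) here is a sketch of an explicit finite-difference scheme in space with an Euler-Maruyama step in time for a simple Itô-type stochastic heat equation du = kappa*u_xx dt + sigma dW with zero boundary values. The equation, the parameter values, and the crude pointwise discretisation of the noise are assumptions; the assert encodes the usual mesh restriction kappa*dt/dx^2 <= 1/2 that keeps the deterministic part of the explicit scheme stable.

import numpy as np

def explicit_sfd_heat(kappa=1.0, sigma=0.1, L=1.0, T=0.1, nx=50, nt=2000, seed=0):
    # Explicit finite differences in space, Euler-Maruyama in time, for
    # du = kappa * u_xx dt + sigma dW on (0, L) with u = 0 at both ends.
    rng = np.random.default_rng(seed)
    dx, dt = L / nx, T / nt
    assert kappa * dt / dx**2 <= 0.5, "explicit scheme unstable for this mesh"
    x = np.linspace(0.0, L, nx + 1)
    u = np.sin(np.pi * x)                   # smooth initial condition
    for _ in range(nt):
        lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        noise = sigma * np.sqrt(dt) * rng.standard_normal(nx - 1)
        u[1:-1] += kappa * dt * lap + noise  # interior update; boundaries stay 0
    return x, u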
Hyers-Ulam stability of Volterra integral equation
We apply the successive approximation method to prove the Hyers-Ulam stability of a linear integral equation of the second kind.
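As a concrete illustration of the successive approximation (Picard) method mentioned above, and not of that paper's stability proof itself, here is a small sketch that iterates x_{n+1}(t) = f(t) + lam * integral_0^t K(t, s) x_n(s) ds with trapezoidal quadrature; the function name and the quadrature choice are assumptions.

import numpy as np

def successive_approximation(f, K, lam=1.0, t_max=1.0, n=200, tol=1e-10, max_iter=100):
    # Picard iteration for x(t) = f(t) + lam * integral_0^t K(t, s) x(s) ds,
    # discretised with the trapezoidal rule on a uniform grid.
    t = np.linspace(0.0, t_max, n + 1)
    x = f(t)                                 # x_0 = f
    for _ in range(max_iter):
        x_new = np.empty_like(x)
        for i, ti in enumerate(t):
            integrand = K(ti, t[: i + 1]) * x[: i + 1]
            x_new[i] = f(ti) + lam * np.trapz(integrand, t[: i + 1])
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return t, x

# Example: x(t) = 1 + integral_0^t x(s) ds has exact solution x(t) = exp(t).
t, x = successive_approximation(lambda t: 1.0 + 0.0 * t, lambda t, s: np.ones_like(s))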
Boosting method for local learning
We propose a local boosting method for classification problems, borrowing from an idea of the local likelihood method. The proposed method includes a simple device for localization to keep computation feasible. We prove the Bayes risk consistency of the local boosting in the framework of PAC learning. Inspection of the proof provides a useful viewpoint for comparing the ordinary boosting and the...